Moral uncertainty vs related concepts
Overview
How important is the well‐being of non‐human animals compared with the well‐being of humans?
How much should we spend on helping strangers in need?
How much should we care about future generations?
How should we weigh reasons of autonomy and respect against reasons of benevolence?
Few could honestly say that they are fully certain about the answers to these pressing moral questions. Part of the reason we feel less than fully certain about the answers has to do with uncertainty about empirical facts. We are uncertain about whether fish can feel pain, whether we can really help strangers far away, or what we could do for people in the far future. However, sometimes, the uncertainty is fundamentally moral. [...] Even if we were to come to know all the relevant non‐normative facts, we could still waver about whether it is right to kill an animal for a very small benefit for a human, whether we have strong duties to help strangers in need, and whether future people matter as much as current ones. Fundamental moral uncertainty can also be more general as when we are uncertain about whether a certain moral theory is correct. (Bykvist; emphasis added)[1]
I consider the above quote a great starting point for understanding what moral uncertainty is; it gives clear examples of moral uncertainties, and contrasts these with related empirical uncertainties. From what I’ve seen, a lot of academic work on moral uncertainty essentially opens with something like the above, then notes that the rational approach to decision-making under empirical uncertainty is typically considered to be expected utility theory, then discusses various approaches for decision-making under moral uncertainty.
That’s fair enough, as no one article can cover everything, but it also leaves open some major questions about what moral uncertainty actually is.[2] These include:
- How, more precisely, can we draw lines between moral and empirical uncertainty?
- What are the overlaps and distinctions between moral uncertainty and other related concepts, such as normative, metanormative, decision-theoretic, and metaethical uncertainty, as well as value pluralism?
  - My prior post answers similar questions about how morality overlaps with and differs from related concepts, and may be worth reading before this one.
- Is what we “ought to do” under moral uncertainty an objective or subjective matter?
- Is what we “ought to do” under moral uncertainty a matter of rationality or morality?
- Are we talking about “moral risk” or about “moral (Knightian) uncertainty” (if such a distinction is truly meaningful)?
- What “types” of moral uncertainty are meaningful for moral antirealists and/or subjectivists?[3]
In this post, I collect and summarise ideas from academic philosophy and the LessWrong and EA communities in an attempt to answer the first two of the above questions (or to at least clarify what the questions mean, and what the most plausible answers are). My next few posts will do the same for the remaining questions.
I hope this will benefit readers by facilitating clearer thinking and discussion. For example, a better understanding of the nature and types of moral uncertainty may aid in determining how to resolve (i.e., reduce or clarify) one’s uncertainty, which I’ll discuss two posts from now. (How to make decisions given moral uncertainty is discussed later in this sequence.)
Epistemic status: The concepts covered here are broad, fuzzy, and overlap in various ways, making definitions and distinctions between them almost inevitably debatable. Additionally, I’m not an expert in these topics (though I have now spent a couple weeks mostly reading about them). I’ve tried to mostly collect, summarise, and synthesise existing ideas. I’d appreciate feedback or comments in relation to any mistakes, unclear phrasings, etc. (and just in general!).
Empirical uncertainty
In the quote at the start of this post, Bykvist (the author) seemed to imply that it was easy to identify which uncertainties in that example were empirical and which were moral. However, in many cases, the lines aren’t so clear. This is perhaps most obvious with regards to, as Christian Tarsney puts it:
Certain cases of uncertainty about moral considerability (or moral status more generally) [which] turn on metaphysical uncertainties that resist easy classification as empirical or moral.
[For example,] In the abortion debate, uncertainty about when in the course of development the fetus/infant comes to count as a person is neither straightforwardly empirical nor straightforwardly moral. Likewise for uncertainty in Catholic moral theology about the time of ensoulment, the moment between conception and birth at which God endows the fetus with a human soul [...]. Nevertheless, it seems strange to regard these uncertainties as fundamentally different from more clearly empirical uncertainties about the moral status of the developing fetus (e.g., uncertainty about where in the gestation process complex mental activity, self-awareness, or the capacity to experience pain first emerge), or from more clearly moral uncertainties (e.g., uncertainty, given a certainty that the fetus is a person, whether it is permissible to cause the death of such a person when doing so will result in more total happiness and less total suffering).[4]
And there are also other types of cases in which it seems hard to find clear, non-arbitrary lines between moral and empirical uncertainties (some of which Tarsney [p. 140-146] also discusses).[5] Altogether, I expect drawing such lines will quite often be difficult.
Fortunately, we may not actually need to draw such lines anyway. In fact, as I discuss in my post on making decisions under both moral and empirical uncertainty, many approaches for handling moral uncertainty were consciously designed by analogy to approaches for handling empirical uncertainty, and it seems to me that they can easily be extended to handle both moral and empirical uncertainty, without having to distinguish between those “types” of uncertainty.[6][7]
The situation is a little less clear when it comes to resolving one’s uncertainty (rather than just making decisions given uncertainty). It seems at first glance that you might need to investigate different “types” of uncertainty in different ways. For example, if I’m uncertain whether fish react to pain in a certain way, I might need to read studies about that, whereas if I’m uncertain what “moral status” fish deserve (even assuming that I know all the relevant empirical facts), then I might need to engage in moral reflection. However, it seems to me that the key difference in such examples is what the uncertainties are actually about, rather than specifically whether a given uncertainty should be classified as “moral” or “empirical”.
(It’s also worth quickly noting that the topic of “cluelessness” is only about empirical uncertainty—specifically, uncertainty regarding the consequences that one’s actions will have. Cluelessness thus won’t be addressed in my posts on moral uncertainty, although I do plan to later write about it separately.)
Normative uncertainty
As I noted in my prior post:
A normative statement is any statement related to what one should do, what one ought to do, which of two things are better, or similar. [...] Normativity is thus the overarching category (superset) of which things like morality, prudence [essentially meaning the part of normativity that has to do with one’s own self-interest, happiness, or wellbeing], and arguably rationality are just subsets.
In the same way, normative uncertainty is a broader concept, of which moral uncertainty is just one component. Other components could include:
- prudential uncertainty
- decision-theoretic uncertainty (covered below)
- metaethical uncertainty (also covered below), although perhaps it’d make more sense to see metaethical uncertainty as instead just feeding into one’s moral uncertainty
Despite this, academic sources seem to commonly either:
- focus only on moral uncertainty, or
- state or imply that essentially the same approaches for decision-making will work for both moral uncertainty in particular and normative uncertainty in general (which seems to me a fairly reasonable assumption).
On this matter, Tarsney writes:
Fundamentally, the topic of the coming chapters will be the problem of normative uncertainty, which can be roughly characterized as uncertainty about one’s objective reasons that is not a result of some underlying empirical uncertainty (uncertainty about the state of concretia). However, I will confine myself almost exclusively to questions about moral uncertainty: uncertainty about one’s objective moral reasons that is not a result of etc etc. This is in part merely a matter of vocabulary: “moral uncertainty” is a bit less cumbersome than “normative uncertainty,” a consideration that bears some weight when the chosen expression must occur dozens of times per chapter. It is also in part because the vast majority of the literature on normative uncertainty deals specifically with moral uncertainty, and because moral uncertainty provides more than enough difficult problems and interesting examples, so that there is no need to venture outside the moral domain.
Additionally, however, focusing on moral uncertainty is a useful simplification that allows us to avoid difficult questions about the relationship between moral and non-moral reasons (though I am hopeful that the theoretical framework I develop can be applied straightforwardly to normative uncertainties of a non-moral kind). For myself, I have no taste for the moral/non-moral distinction: To put it as crudely and polemically as possible, it seems to me that all objective reasons are moral reasons. But this view depends on substantive normative ethical commitments that it is well beyond the scope of this dissertation to defend. [...]
If one does think that all reasons are moral reasons, or that moral reasons always override non-moral reasons, then a complete account of how agents ought to act under moral uncertainty can be given without any discussion of non-moral reasons (Lockhart, 2000, p. 16). To the extent that one does not share either of these assumptions, theories of choice under moral uncertainty must generally be qualified with “insofar as there are no relevant non-moral considerations.”
Somewhat similarly, this sequence will nominally focus on moral uncertainty, even though:
- some of the work I’m drawing on was nominally focused on normative uncertainty (e.g., Will MacAskill’s thesis)
- I intend most of what I say to be fairly easily generalisable to normative uncertainty more broadly.
Metanormative uncertainty
In MacAskill’s thesis, he writes that metanormativism is “the view that there are second-order norms that govern action that are relative to a decision-maker’s uncertainty about first-order normative claims. [...] The central metanormative question is [...] about which option it’s appropriate to choose [when a decision-maker is uncertain about which first-order normative theory to believe in]”. MacAskill goes on to write:
A note on terminology: Metanormativism isn’t about normativity, in the way that meta-ethics is about ethics, or that a meta-language is about a language. Rather, ‘meta’ is used in the sense of ‘over’ or ‘beyond’
In essence, metanormativism focuses on which metanormative theories (or “approaches”) should be used for making decisions under normative uncertainty.
We can therefore imagine being metanormatively uncertain: uncertain about what metanormative theories to use for making decisions under normative uncertainty. For example:
- You’re normatively uncertain if you see multiple (“first-order”) moral theories as possible and these give conflicting suggestions.
- You’re _meta_normatively uncertain if you’re also unsure whether the best approach for deciding what to do given this uncertainty is the “My Favourite Theory” approach or the “Maximising Expected Choice-worthiness” approach (both of which are explained later in this sequence; a toy sketch of the contrast follows this list).
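To make that contrast concrete, here’s a minimal toy sketch in Python. The theories, options, credences, and choice-worthiness numbers are all hypothetical ones I’ve made up purely for illustration; this isn’t a definitive rendering of either approach (both are properly explained later in the sequence):

```python
# A toy illustration (not from the literature): two hypothetical first-order
# theories assign "choice-worthiness" scores to two options, and we hold
# credences in each theory.

credences = {"utilitarianism": 0.6, "kantianism": 0.4}

choiceworthiness = {
    "utilitarianism": {"lie": 10, "tell_truth": 4},
    "kantianism": {"lie": -50, "tell_truth": 5},
}

def my_favourite_theory(credences, cw):
    """Act on whichever theory you have most credence in; ignore the others."""
    favourite = max(credences, key=credences.get)
    return max(cw[favourite], key=cw[favourite].get)

def maximise_expected_choiceworthiness(credences, cw):
    """Weight each theory's scores by your credence in it, then pick the best option."""
    options = next(iter(cw.values())).keys()
    expected = {
        option: sum(credences[theory] * cw[theory][option] for theory in credences)
        for option in options
    }
    return max(expected, key=expected.get)

print(my_favourite_theory(credences, choiceworthiness))                 # -> 'lie'
print(maximise_expected_choiceworthiness(credences, choiceworthiness))  # -> 'tell_truth'
```

Note that the second function quietly assumes the theories’ choice-worthiness scores can be meaningfully compared on a single scale; as discussed later in the sequence, not all theories provide scores of that kind (some only say which option is better, not by how much).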
This leads inevitably to the following thought:
It seems that, just as we can suffer [first-order] normative uncertainty, we can suffer [second-order] metanormative uncertainty as well: we can assign positive probability to conflicting [second-order] metanormative theories. [Third-order] Metametanormative theories, then, are collections of claims about how we ought to act in the face of [second-order] metanormative uncertainty. And so on. In the end, it seems that the very existence of normative claims—the very notion that there are, in some sense or another, ways “one ought to behave”—organically gives rise to an infinite hierarchy of metanormative uncertainty, with which an agent may have to contend in the course of making a decision. (Philip Trammell)
I refer readers interested in this possibility of infinite regress—and potential solutions or reasons not to worry—to Trammell, Tarsney, and MacAskill (p. 217-219). (I won’t discuss those matters further here, and I haven’t properly read those Trammell or Tarsney papers myself.)
Decision-theoretic uncertainty
(Readers who are unfamiliar with the topic of decision theories may wish to read up on that first, or to skip this section.)
MacAskill writes:
Given the trenchant disagreement between intelligent and well-informed philosophers, it seems highly plausible that one should not be certain in either causal or evidential decision theory. In light of this fact, Robert Nozick briefly raised an interesting idea: that perhaps one should take decision-theoretic uncertainty into account in one’s decision-making.
This is precisely analogous to taking uncertainty about first-order moral theories into account in decision-making. Thus, decision-theoretic uncertainty is just another type of normative uncertainty. Furthermore, arguably, it can be handled using the same sorts of “metanormative theories” suggested for handling moral uncertainty (which are discussed later in this sequence).
Chapter 6 of MacAskill’s thesis is dedicated to discussion of this matter, and I refer interested readers there. For example, he writes:
metanormativism about decision theory [is] the idea that there is an important sense of ‘ought’ (though certainly not the only sense of ‘ought’) according to which a decision-maker ought to take decision-theoretic uncertainty into account. I call any metanormative theory that takes decision-theoretic uncertainty into account a type of meta decision theory [- in] contrast to a metanormative view according to which there are norms that are relative to moral and prudential uncertainty, but not relative to decision-theoretic uncertainty.[8]
Metaethical uncertainty
While normative ethics addresses such questions as “What should I do?”, evaluating specific practices and principles of action, meta-ethics addresses questions such as “What is goodness?” and “How can we tell what is good from what is bad?”, seeking to understand the nature of ethical properties and evaluations. (Wikipedia)
To illustrate, normative (or “first-order”) ethics involves debates such as “Consequentialist or deontological theories?”, while _meta_ethics involves debates such as “Moral realism or moral antirealism?” Thus, in just the same way we could be uncertain about first-order ethics (morally uncertain), we could be uncertain about metaethics (metaethically uncertain).
It seems that metaethical uncertainty is rarely discussed; in particular, I’ve found no detailed treatment of how to make decisions under metaethical uncertainty. However, there is one brief comment on the matter in MacAskill’s thesis:
even if one endorsed a meta-ethical view that is inconsistent with the idea that there’s value in gaining more moral information [e.g., certain types of moral antirealism], one should not be certain in that meta-ethical view. And it’s high-stakes whether that view is true — if there are moral facts out there but one thinks there aren’t, that’s a big deal! Even for this sort of antirealist, then, there’s therefore value in moral information, because there’s value in finding out for certain whether that meta-ethical view is correct.
It seems to me that, if and when we face metaethical uncertainties that are relevant to the question of what we should actually do, we could likely use basically the same approaches that are advised for decision-making under moral uncertainty (which I discuss later in this sequence).[9]
Moral pluralism
A different matter that could appear similar to moral uncertainty is moral pluralism (aka value pluralism, aka pluralistic moral theories). According to SEP:
moral pluralism [is] the view that there are many different moral values.
Commonsensically we talk about lots of different values—happiness, liberty, friendship, and so on. The question about pluralism in moral theory is whether these apparently different values are all reducible to one supervalue, or whether we should think that there really are several distinct values.
MacAskill notes that:
Someone who [takes a particular expected-value-style approach to decision-making] under uncertainty about whether only wellbeing, or both knowledge and wellbeing, are of value looks a lot like someone who is conforming with a first-order moral theory that assigns both wellbeing and knowledge value.
In fact, one may even decide to react to moral uncertainty by giving up any degree of belief in the individual first-order moral theories they were uncertain over, and instead having complete belief in a new (and still first-order) moral theory that combines those previously-believed theories.[10] For example, after discussing two approaches for thinking about the “moral weight” of different animals’ experiences, Brian Tomasik writes:
Both of these approaches strike me as having merit, and not only am I not sure which one I would choose, but I might actually choose them both. In other words, more than merely having moral uncertainty between them, I might adopt a “value pluralism” approach and decide to care about both simultaneously, with some trade ratio between the two.[11]
But it’s important to note that this really isn’t the same as moral uncertainty; the difference is not merely verbal or merely a matter of framing. For example, if Alan has complete belief in a pluralistic combination of utilitarianism and Kantianism, rather than uncertainty over the two theories (a toy sketch of this contrast follows the list below):
- Alan has no need for a (second-order) metanormative theory for decision-making under moral uncertainty, because he no longer has any moral uncertainty.
  - If instead Alan has less than complete belief in the pluralistic theory, then the moral uncertainty that remains is between the pluralistic theory and whatever other theories he has some belief in (rather than between utilitarianism, Kantianism, and whatever other theories he has some belief in).
- We can’t represent the idea of Alan updating to believe more strongly in the Kantian theory, or to believe more strongly in the utilitarian theory.[12]
- Relatedly, we’re no longer able to straightforwardly apply the idea of value of information to things that may inform Alan’s degree of belief in each theory.[13]
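To illustrate those differences, here’s another minimal toy sketch (again my own construction, with hypothetical theories and numbers rather than anything drawn from the sources cited above). Under moral uncertainty, Alan’s credences over distinct theories can shift; under complete belief in a single pluralistic theory, there is just one value function with a fixed trade ratio, and no credences over the component theories left to update:

```python
# A toy illustration (my own construction, hypothetical numbers throughout).

# (a) Moral uncertainty: Alan holds credences over two distinct first-order theories,
# and those credences can be updated (e.g., after reflection or new arguments).
alan_credences = {"utilitarianism": 0.5, "kantianism": 0.5}

def shift_credence(credences, theory, boost):
    """Move some credence towards one theory, then renormalise so credences sum to 1."""
    updated = dict(credences)
    updated[theory] += boost
    total = sum(updated.values())
    return {t: c / total for t, c in updated.items()}

# Alan coming to believe more strongly in Kantianism is easy to represent:
print(shift_credence(alan_credences, "kantianism", 0.3))

# (b) Moral pluralism: Alan fully believes ONE first-order theory that values both
# considerations, combined via a fixed trade ratio. There are no credences over
# component theories left to update, so "believing more strongly in the Kantian
# part" has no straightforward representation here.
def pluralistic_choiceworthiness(utilitarian_score, kantian_score, trade_ratio=2.0):
    """A single pluralistic theory: one overall score with a fixed trade ratio."""
    return utilitarian_score + trade_ratio * kantian_score

print(pluralistic_choiceworthiness(utilitarian_score=4, kantian_score=5))  # -> 14.0
```

(As footnote 12 notes, the closest analogue of belief revision for the pluralist would be uncertainty about the pluralistic theory’s own contents, e.g. about the right trade ratio.)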
Closing remarks
I hope this post helped clarify the distinctions and overlaps between moral uncertainty and related concepts. (And as always, I’d welcome any feedback or comments!) In my next post, I’ll continue exploring what moral uncertainty actually is, this time focusing on the questions:
- Is what we “ought to do” under moral uncertainty an objective or subjective matter?
- Is what we “ought to do” under moral uncertainty a matter of rationality or morality?
Footnotes

[1]
For another indication of why the topic of moral uncertainty as a whole matters, see this quote from Christian Tarsney’s thesis:
The most popular method of investigation in contemporary analytic moral philosophy, the method of reflective equilibrium based on heavy appeal to intuitive judgments about cases, has come under concerted attack and is regarded by many philosophers (e.g. Singer (2005), Greene (2008)) as deeply suspect. Additionally, every major theoretical approach to moral philosophy (whether at the level of normative ethics or metaethics) is subject to important and intuitively compelling objections, and the resolution of these objections often turns on delicate and methodologically fraught questions in other areas of philosophy like the metaphysics of consciousness or personal identity (Moller, 2011, pp. 428- 432). Whatever position one takes on these debates, it can hardly be denied that our understanding of morality remains on a much less sound footing than, say, our knowledge of the natural sciences. If, then, we remain deeply and justifiably uncertain about a litany of important questions in physics, astronomy, and biology, we should certainly be at least equally uncertain about moral matters, even when some particular moral judgment is widely shared and stable upon reflection.
[2]
In an earlier post which influenced this one, Kaj_Sotala wrote:
I have long been slightly frustrated by the existing discussions about moral uncertainty that I’ve seen. I suspect that the reason has been that they’ve been unclear on what exactly they mean when they say that we are “uncertain about which theory is right”—what is uncertainty about moral theories? Furthermore, especially when discussing things in an FAI [Friendly AI] context, it feels like several different senses of moral uncertainty get mixed together.
[3]
In various places in this sequence, I’ll use language that may appear to endorse or presume moral realism (e.g., referring to “moral information” or to the probability of a particular moral theory being “correct”). But this is essentially just for convenience; I intend this sequence to be as neutral as possible on the matter of moral realism vs antirealism (except when directly focusing on such matters).
I think that the interpretation and importance of moral uncertainty is clearest for realists, but, as I discuss in this post, I also think that moral uncertainty can still be a meaningful and important topic for many types of moral antirealist.
[4]
As another example of this sort of case, suppose I want to know whether fish are “conscious”. This may seem on the face of it an empirical question. However, I might not yet know precisely what I mean by “conscious”, and I might in fact only really want to know whether fish are “conscious in a sense I would morally care about”. In this case, the seemingly empirical question becomes hard to disentangle from the (seemingly moral) question: “What forms of consciousness are morally important?”
And in turn, my answers to that question may be influenced by empirical discoveries. For example, I may initially believe that avoidance of painful stimuli demonstrates consciousness in a morally relevant sense, but then revise that belief when I learn that this behaviour can be displayed in a stimulus-response way by certain extremely simple organisms.
[5]
The boundaries become even fuzzier, and may lose their meaning entirely, if one assumes the metaethical view known as moral naturalism, which:
refers to any version of moral realism that is consistent with [...] general philosophical naturalism. Moral realism is the view that there are objective, mind-independent moral facts. For the moral naturalist, then, there are objective moral facts, these facts are facts concerning natural things, and we know about them using empirical methods. (SEP)
This sounds to me like it would mean that all moral uncertainties are effectively empirical uncertainties, and that there’s no difference in how moral vs empirical uncertainties should be resolved or incorporated into decision-making. But note that that’s my own claim; I haven’t seen it made explicitly by writers on these subjects.
That said, one quote that seems to suggest something like this claim is the following, from Tarsney’s thesis:
Most generally, naturalistic metaethical views that treat normative ethical theorizing as continuous with natural science will see first-order moral principles as at least epistemically if not metaphysically dependent on features of the empirical world. For instance, on Railton’s (1986) view, moral value attaches (roughly) to social conditions that are stable with respect to certain kinds of feedback mechanisms (like the protest of those who object to their treatment under existing social conditions). What sort(s) of social conditions exhibit this stability, given the relevant background facts about human psychology, is an empirical question. For instance, is a social arrangement in which parents can pass down large advantages to their offspring through inheritance, education, etc, more stable or less stable than one in which the state intervenes extensively to prevent such intergenerational perpetuation of advantage? Someone who accepts a Railtonian metaethic and is therefore uncertain about the first-order normative principles that govern such problems of distributive justice, though on essentially empirical grounds, seems to occupy another sort of liminal space between empirical and moral uncertainty.
Footnote 15 of this post discusses relevant aspects of moral naturalism, though not this specific question.
[6]
In fact, Tarsney uses his discussion (p. 140-146) of the difficulty of disentangling moral and empirical uncertainties to argue for the merits of approaching moral uncertainty analogously to how one approaches empirical uncertainty.
[7]
An alternative approach that also doesn’t require determining whether a given uncertainty is moral or empirical is the “worldview diversification” approach used by the Open Philanthropy Project. In this context, a worldview is described as representing “a combination of views, sometimes very difficult to disentangle, such that uncertainty between worldviews is constituted by a mix of empirical uncertainty (uncertainty about facts), normative uncertainty (uncertainty about morality), and methodological uncertainty (e.g. uncertainty about how to handle uncertainty [...]).” Open Phil “[puts] significant resources behind each worldview that [they] find highly plausible.” This doesn’t require treating moral and empirical uncertainty any differently, and thus doesn’t require drawing lines between those “types” of uncertainty.
[8]
As with metanormative uncertainty in general, this can lead to complicated regresses. For example, there’s the possibility of constructing causal meta decision theories and evidential meta decision theories, and of being uncertain over which of those meta decision theories to endorse, and so on. As above, see Trammell, Tarsney, and MacAskill (p. 217-219) for discussion of such matters.
[9]
In a good, short post, Ikaxas writes:
How should we deal with metaethical uncertainty? [...] One answer is this: insofar as some metaethical issue is relevant for first-order ethical issues, deal with it as you would any other normative uncertainty. And insofar as it is not relevant for first-order ethical issues, ignore it (discounting, of course, intrinsic curiosity and any value knowledge has for its own sake).
Some people think that normative ethical issues ought to be completely independent of metaethics: “The whole idea [of my metaethical naturalism] is to hold fixed ordinary normative ideas and try to answer some further explanatory questions” (Schroeder [...]). Others [...] believe that metaethical and normative ethical theorizing should inform each other. For the first group, my suggestion in the previous paragraph recommends that they ignore metaethics entirely (again, setting aside any intrinsic motivation to study it), while for the second my suggestion recommends pursuing exclusively those areas which are likely to influence conclusions in normative ethics.
This seems to me like a good extension/application of general ideas from work on the value of information. (I’ll apply such ideas to moral uncertainty later in this sequence.)
Tarsney gives an example of the sort of case in which metaethical uncertainty is relevant to decision-making (though that’s not the point he’s making with the example):
For instance, consider an agent Alex who, like Alice, divides his moral belief between two theories, a hedonistic and a pluralistic version of consequentialism. But suppose that Alex also divides his metaethical beliefs between a robust moral realism and a fairly anemic anti-realism, and that his credence in hedonistic consequentialism is mostly or entirely conditioned on his credence in robust realism while his credence in pluralism is mostly or entirely conditioned on his credence in anti-realism. (Suppose he inclines toward a hedonistic view on which certain qualia have intrinsic value or disvalue entirely independent of our beliefs, attitudes, etc, which we are morally required to maximize. But if this view turns out to be wrong, he believes, then morality can only consist in the pursuit of whatever we contingently happen to value in some distinctively moral way, which includes pleasure but also knowledge, aesthetic goods, friendship, etc.)
[10]
Or, more moderately, one could remove just some degree of belief in some subset of the moral theories that one had some degree of belief in, and place that amount of belief in a new moral theory that combines just that subset of moral theories. E.g., one may initially think utilitarianism, Kantianism, and virtue ethics each have a 33% chance of being “correct”, but then switch to believing that a pluralistic combination of utilitarianism and Kantianism is 67% likely to be correct, while virtue ethics is still 33% likely to be correct.
[11]
Luke Muehlhauser also appears to endorse a similar approach, though not explicitly in the context of moral uncertainty. And Kaj Sotala also seems to endorse a similar approach, though without using the term “pluralism” (I’ll discuss Kaj’s approach two posts from now). Finally, MacAskill quotes Nozick appearing to endorse a similar approach with regards to decision-theoretic uncertainty:
I [Nozick] suggest that we go further and say not merely that we are uncertain about which one of these two principles, [CDT] and [EDT], is (all by itself) correct, but that both of these principles are legitimate and each must be given its respective due. The weights, then, are not measures of uncertainty but measures of the legitimate force of each principle. We thus have a normative theory that directs a person to choose an act with maximal decision-value.
[12]
The closest analog would be Alan updating his beliefs about the pluralistic theory’s contents/substance; for example, coming to believe that a more correct interpretation of the theory would lean more in a Kantian direction. (Although, if we accept that such an update is possible, it may arguably be best to represent Alan as having moral uncertainty between different versions of the pluralistic theory, rather than being certain that the pluralistic theory is “correct” but uncertain about what it says.)
[13]
That said, we can still apply value of information analysis to things like Alan reflecting on how best to interpret the pluralistic moral theory (assuming again that we represent Alan as uncertain about the theory’s contents). A post later in this sequence will be dedicated to how and why to estimate the “value of moral information”.
I like this and feel it’s a step in the right direction. I’ve been finding that difficult uncertainties are often difficult because they are actually a pile-up of 2+ types of uncertainty, and therefore naive attempts to resolve them without first unpacking them wind up with type errors in the proposed solution.
Thanks! And yes, I’d agree with that.
Though I’d also emphasise that that’s most clearly true for, as you say, resolving the uncertainties. I currently think that recognising what type of uncertainty one is dealing with may not matter, or may matter less or matter less often, for making decisions given uncertainty (ignoring the possibility of deciding to gather more info or otherwise work towards resolving the uncertainties).
This is for the reasons I discuss in this comment. In particular (and to obnoxiously quote myself):
(This isn’t disagreeing with you at all, as you only mentioned attempts to resolve rather than act under uncertainty, but I just thought it was worth noting.)
That said, it is possible there are cases in which even procedures for decision-making under uncertainty would still require knowing what kind of uncertainty one is facing. Both Tarsney and MacAskill argue that, for decision-making under moral uncertainty, different types of moral theories may need to be aggregated in different ways. (E.g., you can do expected value style reasoning with theories that say “how much” better one option is than another, but perhaps not for those that only say which option is better; I discuss this here.)
It seems plausible that similar things could occur for different types of uncertainty, such that, for example, if you realise that you’re actually uncertain about decision theories rather than moral theories, that changes how you should aggregate the “views” of the different theories. But this is currently my own speculation; I haven’t tried to work through the details, and haven’t seen prior work explicitly discussing matters like this (beyond the semi-related or in-passing stuff covered in this post).
I think it’s valuable because of value of information considerations. Some types of uncertainty are dramatically more reducible than others. Some will be more prone to gotchas (sign flips).
Yes, that’s part of what I mean by the “resolving uncertainties” side. Value of information has to do with the chance new information would change one’s current views, which is a matter of (partially) resolving uncertainty, rather than a matter of making decisions given current uncertainties (if we ignore for a moment the possibility of making decisions about whether to gain more info).
I’ll be writing a post that has to do with resolving uncertainties soon, and then another applying VoI to moral uncertainty. I wasn’t planning to discuss the different types of uncertainty there (I was planning to instead focus just on different subtypes of moral uncertainty). But your comments have made me think maybe it’d be worth doing so (if I can think of something useful to say, and if saying it doesn’t add more length/complexity than it’s worth).
This is a valuable post, certainly, and I appreciate you writing it—it lays out some (clearly very relevant) ideas in a straightforward way.
That said, most of this seems to be predicated on “moral uncertainty” being a coherent concept—and, of course, it is (so far, in the sequence) an unexplained one.[1] So, I am not yet quite sure what I think of the substantive points you describe/mention.
Some of the other concepts here seem to me to be questionable (not necessarily incoherent, just… not obviously coherent) for much the same reasons that “moral uncertainty” is. I will refrain from commenting on that in detail, for now, but may come back to it later (contingent on what I think about “moral uncertainty” itself, once I see it explained).
In any case, thank you for taking the time to write these posts!
I know that comes in the next post; I haven’t read it yet, but will shortly. I’m merely commenting as I go along.
Also, just regarding “of course, [moral uncertainty] is (so far, in the sequence) an unexplained [concept]”:
The later posts will further flesh out the concept, provide more examples, etc. But as far as I can tell, there unfortunately isn’t just one neat, simple explanation of the concept that everyone will agree to, that will make self-evident all the important points, and that won’t just rely on other terms that need explaining too. This is partly because the term is used in different ways by different people, and partly because the concept obviously involves morality and thus can’t be fully disentangled from various meta-ethical quagmires.
This is part of why I try to explain the term from multiple angles, using multiple examples, contrasting it with other terms, etc., rather than just being able to say “Moral uncertainty is...”, list four criteria, explain those, and be done with it (or something like that).
But if part of your feelings are premised on being suspicious of non-naturalistic moral realism, then perhaps the post you’ll find most useful will be the one on what moral uncertainty can mean for antirealists and subjectivists, which should hopefully be out early next week.
(I guess one way of putting this is that the explanation will unfold gradually, and really we’re talking about something a bit more like a cluster of related ideas rather than one neat simple crisp thing—it’s not that I’ve been holding the explanation of that one neat simple crisp thing back from readers so far!)
That’s certainly a big part of it (see my reply to sibling comment for more). It’s not all of it, though. I listed some questions in my initial comment asking for an explanation of what moral realism is; I’ll want to revisit them (as well as a couple of others that’ve occurred to me), once the entire sequence (or, at least, this upcoming post you mention) is posted.
Certainly understandable.
Although—if, indeed, the term is used in different ways by different people (as seems likely enough), then perhaps it might make sense, instead of trying to explain “moral uncertainty”, rather to clearly separate the concepts labeled by this term into distinct buckets, and explain them separately.
Then again, it’s hard for me to judge any of these explanations too confidently, given, as you say, the “unfolding” dynamic… we will see, I suppose, what I think of the whole thing, when it’s all posted!
(I don’t think there’s a conflict between what I say in this comment and what you said in yours—I think they’re different points, but not inconsistent.)
I think I understand what you mean, and sympathise. To be clear, I’m trying to explain what existing concepts are meant to mean, and how they seem to relate to each other, rather than putting forward new concepts or arguing that these are necessarily good, coherent, useful concepts that carve reality at the joints.
This is why I say “I hope this will benefit readers by facilitating clearer thinking and discussion.” Even if we aren’t sure these are useful concepts, they definitely are used, both on LessWrong and elsewhere, and so it seems worth us getting more on the same page with what we’re even trying to say. That may in turn also help us work out whether what we’re trying to say is actually empty, and pointing to nothing in the real world at all.
I think, though I’m not sure, that a lot of the question of how coherent some of these concepts really are—or how much they point at real things—also comes down to the question of whether non-naturalistic moral realism makes sense or is true. (If it doesn’t, we can still have a coherent concept that we call “moral uncertainty”, along the lines of what coherent extrapolated volition is about, but it seems to me—though I could be wrong—to be something substantively different.)
Personally, I’ve got something like a Pascal’s wager going on that front—it seems hard for me to even imagine what it would mean for non-naturalistic moral realism to be true, and thus very unlikely that it is true, but it seems worth acting as if it’s true anyway. (I’m not sure if this reasoning actually makes sense—I plan to write a post about it later.)
But in any case, whether due to that sort of Pascal’s wager, or a more general sense of epistemic humility (as it seems a majority of philosophers are moral realists), or even to clarify and thus facilitate conversations that may ultimately kill our current concept of moral uncertainty, it seems worth clarifying what we’re even trying to say with the terms and concepts we use.
This is my view also (except that I would probably drop even the “non-naturalistic” qualifier; I’m unsure of this, because I haven’t seen this term used consistently in the literature… what is your preferred reference for what is meant by “naturalistic” vs. “non-naturalistic” moral realism?).
I would like to read such a post, certainly. I find your comment here interesting, because there’s a version of this sort of view (“worth acting as if it’s true anyway”) that I find to be possibly reasonable—but it’s not one I’d ever describe as a “Pascal’s wager”! So perhaps you mean something else by it, which difference / conceptual collision seems worth exploring.
In any case, I agree that clarifying the terms as they are used is worthwhile. (Although one caveat is that if a term / concept is incoherent, there is an upper limit to how much clarity can be achieved in discerning how the term is used! But even in this case, the attempt is worthy.)
This, too, seems worth writing about!
Glad to hear you think so! That’s roughly what the post (mentioned in my other comment) which I hope to finish by early next week will be about.
I think that’s true, but also that additional valiant attempts to clarify incoherent terms that still leave them seeming very unclear and incoherent might help us gain further evidence that the terms are worth abandoning entirely. Sort of like just trying a cure for some disease and finding it fails, so we can rule that out, rather than theorising about why that cure might not work (which could also be valuable).
(That said, that wasn’t my explicit intention when I wrote this post—it just came to mind as an interesting possible bonus and/or rationalisation when I read your comment.)
Is your version of this sort of view something more like the idea that it should all “add up to normality” in the end, and that moral antirealism should be able to “rescue” our prior intuitions about morality anyway, so we should still end up valuing basically the same things whether or not realism is true?
If so, that’s also something I find fairly compelling. And I think it’ll often lead to similar actions in effect. But I do expect some differences could occur. E.g., I’m very concerned about the idea of designing an AGI that implements coherent extrapolated volition, even if it all goes perfectly as planned, because I see it as quite possible, and possibly extremely high stakes, that there really is some sort of “moral truth” that’s not at all grounded in what humans value. (That is, something that may or may not overlap or be correlated with what we value, but doesn’t come from the fact that we value certain things.)
I’m not saying I have a better alternative, because I do find compelling the arguments along the lines of “We can’t just tell an AGI to find the moral truth and act on it, because ‘moral truth’ isn’t a clear enough concept and there may be no fundamental thing that matches that idea out there in the world.” But I’d ideally like us to hold back on trying to implement a strategy based on moral antirealism or on assuming moral realism + that the ‘moral truth’ will be naturally findable by an AGI, because I see “moral truth” as at least possibly a coherent and reality-matching concept. (In practice, we may need to just lock something in to avoid some worse lock-in, and CEV may be the best we’ve got. But I don’t think it’s just obvious that that’s definitely all there is to morality, and that we should happily move towards CEV as fast as we can.)
I’m more confident in the above ideas than I am in my Pascal’s wager type thing. The Pascal’s wager type thing is something a bit stronger—not just acting as if uncertain, but acting pretty much as if non-naturalistic moral realism actually is true, because if it is “the stakes are so much higher” than if it isn’t. This seems to come from me sort of conflating nihilism and moral antirealism, which seems rejected in various LessWrong posts and also might differ from standard academic metaethics, but it still seems to me that there might be something to that. But again, these are half-formed, low-confidence thoughts at the moment.
As a general point, I have a half-formed thought along the lines of “Metaethics—and to some extent morality—is like a horrible stupid quagmire of wrong questions, at least if we take non-naturalistic moral realism seriously, but unfortunately it seems like the one case in which we may have to just wade through that as best we can rather than dissolving it.” (I believe Eliezer has written against the second half of that view, but I currently don’t find his points there convincing. But I’m quite unsure about all this.)
The relevance here being that I’d agree that the terms are used far from consistently, and perhaps that’s because we’re just totally confused about what we’re even trying to say.
But that being said, I think a good discussion of naturalistic vs non-naturalistic realism, and an indication of why I added the qualifier in the above sentences, can be found in footnote 15 of this post. E.g. (but the whole footnote is worth reading):